With the proliferation of deep generative models, deepfakes are improving in quality and quantity every day. However, pristine videos contain subtle authenticity signals that state-of-the-art GANs do not replicate. We contrast motion in deepfakes and authentic videos via motion magnification, towards building a generalized deepfake source detector. Sub-muscular facial motion is interpreted differently by different generative models, and this difference is reflected in their generative residue. Our approach exploits the gap between real motion and amplified GAN fingerprints, combining deep and traditional motion magnification, to detect whether a video is fake and, if so, its source generator. Evaluating our approach on two multi-source datasets, we obtain 97.17% and 94.03% accuracy for video source detection. We compare against the prior deepfake source detector and other complex architectures. We also analyze the importance of the magnification amount, phase extraction window, backbone network architecture, sample count, and sample length. Finally, we report results across skin tones to assess bias.
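To make the magnification step concrete, below is a minimal sketch of Eulerian-style motion magnification on a toy signal: temporal variations of each pixel are band-pass filtered and amplified before being added back. This is an illustrative simplification, not the paper's exact pipeline; the function name, the `alpha` amplification factor, and the pass-band parameters are assumptions chosen for the example.

```python
import numpy as np

def magnify_motion(frames, alpha=10.0, lo=0.2, hi=3.0, fps=30.0):
    """Amplify subtle temporal variation in a (time, pixels) array.

    A band-pass filter along the time axis isolates motion in the
    [lo, hi] Hz band; that component is scaled by alpha and added
    back to the input (Eulerian-style magnification sketch).
    """
    spectrum = np.fft.fft(frames, axis=0)              # FFT over time
    freqs = np.fft.fftfreq(frames.shape[0], d=1.0 / fps)
    band = (np.abs(freqs) >= lo) & (np.abs(freqs) <= hi)
    filtered = np.fft.ifft(spectrum * band[:, None], axis=0).real
    return frames + alpha * filtered

# Toy example: four "pixels" oscillating subtly at 1 Hz for 2 seconds
t = np.arange(0, 2, 1 / 30.0)
frames = 0.5 + 0.01 * np.sin(2 * np.pi * 1.0 * t)[:, None] * np.ones((1, 4))
out = magnify_motion(frames, alpha=10.0)
# The 1 Hz oscillation falls inside the pass-band, so its amplitude
# grows roughly (1 + alpha)-fold while the DC level is untouched.
```

In the full method, such a magnified signal (or the phase response over a sliding window) would feed the detector backbone; here the band limits simply bracket the slow sub-muscular motion band.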
Synthetic images created with generative models have improved in quality and expressiveness as newer models exploit larger datasets and novel architectures. Although this photorealism is a positive side effect from a creative standpoint, it becomes problematic when such generative models are used for impersonation without consent. Most of these approaches are built on partial transfer between source and target pairs, or they generate entirely new samples from an idealized distribution that still resemble the closest real samples in the dataset. We propose MixSyn (read as "mixin'") for learning novel fuzzy compositions from multiple sources and creating novel images as a mix of the image regions corresponding to the composition. MixSyn not only combines uncorrelated regions from multiple source masks into coherent semantic compositions, but also generates mask-aware, high-quality reconstructions of non-existing images. We compare MixSyn against state-of-the-art single-source sequential generation and collage generation approaches in terms of quality, diversity, realism, and expressiveness, while also demonstrating interactive synthesis, mix-and-match, and edit propagation tasks with no mask dependency.